train: allow passing custom learning rate optimizer #305

Open · wants to merge 1 commit into base: master
Conversation

@breznak (Contributor) commented Jan 26, 2020

  • Allows setting a custom optimizer: train(optimizer=torch.optim.MyCustomOptimizer(..)) (usage sketched below)
  • TODO: add early stopping
  • TODO: ideally avoid the extra steps for learning-rate management (leave it to the optimizer)

For #298
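A hedged usage sketch of the proposed call, with torch.optim.AdamW standing in for the MyCustomOptimizer placeholder; the Yolact import and the idea that the caller builds the network itself are assumptions for illustration, not part of this PR:

```python
import torch.optim as optim

from yolact import Yolact   # assumption: the caller constructs the model itself
from train import train     # assumption: train() is importable from train.py

net = Yolact()
net.train()

# Default behaviour: with optimizer=None, train() falls back to its usual
# SGD(net.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.decay).
train()

# Proposed behaviour: pass any ready-made torch.optim.Optimizer instead.
train(optimizer=optim.AdamW(net.parameters(), lr=1e-4, weight_decay=5e-4))
```

Whether the net built by the caller and the one train() uses internally are the same object is exactly the open point raised in the review comment below.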

def train(optimizer=None):
    """
    @param optimizer: set custom optimizer, default (None) uses
    `torch.optim.SGD(net.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.decay)`
    """
@breznak (Contributor, Author) commented on this diff:

Ideally, I'd already set optimizer=SGD(...) as the default here, but net is not available at this point.
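A minimal sketch of the workaround implied here, assuming Yolact, args and the SGD default quoted in the docstring come from the existing train.py: keep None as the sentinel in the signature and build the default lazily once the network exists.

```python
import torch.optim as optim

def train(optimizer=None):
    """
    @param optimizer: custom optimizer; default (None) falls back to
    SGD(net.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.decay)
    """
    net = Yolact()   # the network is only constructed inside train(),
    net.train()      # which is why the default cannot live in the signature

    if optimizer is None:
        # Build the historical default lazily, now that net.parameters() is available.
        optimizer = optim.SGD(net.parameters(), lr=args.lr,
                              momentum=args.momentum, weight_decay=args.decay)

    # ... rest of the training loop unchanged ...
```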

@@ -291,11 +297,11 @@ def train():
        if changed:
            cfg.delayed_settings = [x for x in cfg.delayed_settings if x[0] > iteration]

        # Warm up by linearly interpolating the learning rate from some smaller value
        if cfg.lr_warmup_until > 0 and iteration <= cfg.lr_warmup_until:
@breznak (Contributor, Author) commented on this diff:

Can we leave this fine-tuning / LR management to the optimizer (or other code provided by the framework)?
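One way to do that, as a sketch only: hand the warmup over to torch.optim.lr_scheduler.LambdaLR and call scheduler.step() once per iteration instead of editing param_groups by hand. cfg.lr_warmup_until is taken from the code above; cfg.lr_warmup_init and args.lr as the warmup start and base values are assumptions about the surrounding config.

```python
from torch.optim.lr_scheduler import LambdaLR

def warmup_factor(iteration):
    # Multiplicative factor relative to the base lr: linearly ramp from
    # cfg.lr_warmup_init / args.lr up to 1.0 over the warmup window, then hold at 1.0.
    if cfg.lr_warmup_until > 0 and iteration < cfg.lr_warmup_until:
        start = cfg.lr_warmup_init / args.lr   # assumed config fields
        return start + (1.0 - start) * iteration / cfg.lr_warmup_until
    return 1.0

scheduler = LambdaLR(optimizer, lr_lambda=warmup_factor)

# In the training loop, replace the manual interpolation with one call per iteration:
# scheduler.step()
```

This keeps all learning-rate management in one place, which is what the comment above asks for; later step decays could be folded into the same lambda or chained as a second scheduler.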

@breznak requested a review from dbolya on January 26, 2020, 20:37
@breznak (Contributor, Author) commented Jan 26, 2020:

Please feel free to take this over; I'll be out for the next week.
